Towards Playing Full MOBA Games with Deep Reinforcement Learning
Multiplayer Online Battle Arena (MOBA) games, e.g., Honor of Kings, League of Legends, and Dota 2, pose grand challenges to AI systems, such as multi-agent coordination, an enormous state-action space, and complex action control. Developing AI for playing MOBA games has therefore attracted much attention. However, existing work falls short in handling the raw game complexity caused by the explosion of agent combinations, i.e., lineups, when the hero pool is expanded; for instance, OpenAI's Dota AI limits play to a pool of only 17 heroes. As a result, full MOBA games without restrictions are far from being mastered by any existing AI system. In this paper, we propose a MOBA AI learning paradigm that methodologically enables playing full MOBA games with deep reinforcement learning. Specifically, we develop a combination of novel and existing learning techniques, including off-policy adaptation, multi-head value estimation, curriculum self-play learning, policy distillation, and Monte-Carlo tree search, to train and play a large pool of heroes while addressing the resulting scalability issues. Tested on Honor of Kings, a popular MOBA game, we show how to build superhuman AI agents that can defeat top esports players. The superiority of our AI is demonstrated by the first large-scale performance test of a MOBA AI agent in the literature.
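As a rough illustration of one technique the abstract names, multi-head value estimation replaces the critic's single scalar value with one head per reward component, combined by a weighted sum. The sketch below is a minimal assumed version: the component names, the mixing weights, and the linear per-head critics are all illustrative placeholders, not the paper's actual design (a real system would share a deep network trunk across heads).

```python
# Hypothetical sketch of multi-head value estimation: the critic predicts
# one value per assumed reward component and mixes them into a scalar.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 8
HEADS = ["farming", "kda", "damage", "pushing"]   # assumed reward components
HEAD_WEIGHTS = np.array([0.3, 0.3, 0.2, 0.2])     # assumed mixing weights

# One linear critic per head (placeholder for a shared deep trunk).
head_params = {h: rng.normal(scale=0.1, size=STATE_DIM) for h in HEADS}

def multi_head_value(state):
    """Return per-head value estimates and their weighted combination."""
    per_head = np.array([head_params[h] @ state for h in HEADS])
    return per_head, float(HEAD_WEIGHTS @ per_head)

state = rng.normal(size=STATE_DIM)
per_head, total = multi_head_value(state)
```

The appeal of this decomposition is that each head sees a denser, lower-variance training signal than the single sparse win/loss outcome, while the policy gradient still uses one combined value.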
Review for NeurIPS paper: Towards Playing Full MOBA Games with Deep Reinforcement Learning
Additional Feedback: some details:
- p1: define MOBA in the abstract
- "such as multi-agent": grammar
- "attention already": reference(s)? Why is this no longer computationally feasible exactly?
- "Note that there still lacks a ...": grammar
- "[Our] AI [system] achieved ..."
- How many players is the "top 0.04%"?
- The paper requires a thorough proofreading effort to make it publishable in the NeurIPS proceedings.
- How important is this incorporated expert knowledge?
- "s.t." is commonly used as a definition or optimization constraint.
Review for NeurIPS paper: Towards Playing Full MOBA Games with Deep Reinforcement Learning
This paper demonstrates an application of RL and search to a challenging MOBA game-playing task, leading to AI agents able to defeat top professional human players. Three out of four reviewers consider that although this is an application-oriented paper with a strong engineering focus, it is still relevant enough for publication at NeurIPS. Only R2 is advocating for rejection, based essentially on the lack of scientific novelty. I believe that such impressive large-scale applications of RL are well worth pushing forward, and I am thus recommending acceptance. The general algorithms being used may not be novel, but their instantiation to solve this specific task largely is.